Compressed Sensing and Parallel Acquisition
Parallel acquisition systems arise in various applications in order to
mitigate problems caused by insufficient measurements in single-sensor systems.
These systems allow simultaneous data acquisition in multiple sensors, thus
alleviating such problems by providing more overall measurements. In this work
we consider the combination of compressed sensing with parallel acquisition. We
establish the theoretical improvements of such systems by providing recovery
guarantees for which, subject to appropriate conditions, the number of
measurements required per sensor decreases linearly with the total number of
sensors. Throughout, we consider two different sampling scenarios -- distinct
(corresponding to independent sampling in each sensor) and identical
(corresponding to dependent sampling between sensors) -- and a general
mathematical framework that allows for a wide range of sensing matrices (e.g.,
subgaussian random matrices, subsampled isometries, random convolutions and
random Toeplitz matrices). We consider not only the standard sparse signal
model but also the so-called sparse in levels signal model, which encompasses
both sparse and distributed signals and clustered sparse signals. As
our results show, optimal recovery guarantees for both distinct and identical
sampling are possible under much broader conditions on the so-called sensor
profile matrices (which characterize environmental conditions between a source
and the sensors) for the sparse in levels model than for the sparse model. To
verify our recovery guarantees we provide numerical results showing phase
transitions for a number of different multi-sensor environments. Comment: 43 pages, 4 figures
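The linear scaling in the number of sensors can be made concrete with a toy numpy sketch (illustrative only, not the paper's framework): C sensors each take m_per Gaussian measurements of one sparse signal, identity sensor profile matrices are assumed for simplicity, and an oracle least-squares solve on the known support stands in for a full CS solver.

```python
import numpy as np

rng = np.random.default_rng(0)

n, C, m_per = 64, 4, 10      # signal length, number of sensors, measurements per sensor
s = 4                        # sparsity of the ground-truth signal

# sparse ground-truth signal
x = np.zeros(n)
supp = rng.choice(n, size=s, replace=False)
x[supp] = rng.standard_normal(s)

# Distinct sampling: an independent Gaussian matrix per sensor; identity
# sensor profile matrices are assumed here for simplicity.
A_blocks = [rng.standard_normal((m_per, n)) / np.sqrt(C * m_per) for _ in range(C)]
A = np.vstack(A_blocks)      # stacked multi-sensor system, shape (C*m_per, n)
y = A @ x

# Oracle recovery on the known support (a stand-in for a CS solver): with
# C*m_per >= s total measurements, least squares recovers x exactly.
x_hat = np.zeros(n)
x_hat[supp], *_ = np.linalg.lstsq(A[:, supp], y, rcond=None)
err = np.linalg.norm(x_hat - x)
print(A.shape, err)
```

Each sensor contributes only 10 of the 40 total rows, so the per-sensor burden drops as sensors are added while the stacked system stays well-posed.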
Uniform Recovery from Subgaussian Multi-Sensor Measurements
Parallel acquisition systems are employed successfully in a variety of
different sensing applications when a single sensor cannot provide enough
measurements for a high-quality reconstruction. In this paper, we consider
compressed sensing (CS) for parallel acquisition systems when the individual
sensors use subgaussian random sampling. Our main results are a series of
uniform recovery guarantees which relate the number of measurements required to
the basis in which the solution is sparse and certain characteristics of the
multi-sensor system, known as sensor profile matrices. In particular, we derive
sufficient conditions for optimal recovery, in the sense that the number of
measurements required per sensor decreases linearly with the total number of
sensors, and demonstrate explicit examples of multi-sensor systems for which
this holds. We establish these results by proving the so-called Asymmetric
Restricted Isometry Property (ARIP) for the sensing system and use this to
derive both nonuniversal and universal recovery guarantees. Compared to
existing work, our results not only lead to better stability and robustness
estimates but also provide simpler and sharper constants in the measurement
conditions. Finally, we show how the problem of CS with block-diagonal sensing
matrices can be viewed as a particular case of our multi-sensor framework.
Specializing our results to this setting leads to a recovery guarantee that is
at least as good as existing results. Comment: 37 pages, 5 figures
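The block-diagonal reduction mentioned at the end can be sketched in a few lines of numpy (purely illustrative): a block-diagonal sensing matrix is exactly a multi-sensor system whose sensor profile matrices restrict the signal to each sensor's block.

```python
import numpy as np

rng = np.random.default_rng(1)
C, m, b = 3, 5, 8            # sensors, measurements per sensor, block length
n = C * b                    # full signal length

# Per-sensor subgaussian matrices, each acting on its own signal block.
A_c = [rng.standard_normal((m, b)) for _ in range(C)]

# Block-diagonal sensing matrix assembled directly.
A_bd = np.zeros((C * m, n))
for c in range(C):
    A_bd[c * m:(c + 1) * m, c * b:(c + 1) * b] = A_c[c]

# The same operator in multi-sensor form A_c @ P_c, where the "sensor
# profile" P_c restricts the full signal to block c.
P = [np.eye(n)[c * b:(c + 1) * b, :] for c in range(C)]
A_ms = np.vstack([A_c[c] @ P[c] for c in range(C)])

print(np.allclose(A_bd, A_ms))   # the two constructions coincide
```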
Deep BCD-Net Using Identical Encoding-Decoding CNN Structures for Iterative Image Recovery
In "extreme" computational imaging that collects extremely undersampled or
noisy measurements, obtaining an accurate image within a reasonable computing
time is challenging. Incorporating image mapping convolutional neural networks
(CNN) into iterative image recovery has great potential to resolve this issue.
This paper 1) incorporates image mapping CNN using identical convolutional
kernels in both encoders and decoders into a block coordinate descent (BCD)
signal recovery method and 2) applies alternating direction method of
multipliers to train the aforementioned image mapping CNN. We refer to the
proposed recurrent network as BCD-Net using identical encoding-decoding CNN
structures. Numerical experiments show that, for a) denoising low
signal-to-noise-ratio images and b) extremely undersampled magnetic resonance
imaging, the proposed BCD-Net achieves significantly more accurate image
recovery, compared to BCD-Net using distinct encoding-decoding structures
and/or the conventional image recovery model using both wavelets and total
variation. Comment: 5 pages, 3 figures
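As a rough caricature of the alternation (not the paper's CNN-based method), the sketch below replaces the encoding-decoding CNN with a tied linear encoder/decoder pair, an orthonormal matrix W and its transpose with soft-thresholding in between, and alternates it with a closed-form data-fit update; W, beta, and the threshold are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 32, 48
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# "Identical encoding-decoding" stand-in: the decoder is the transpose of
# the encoder W (a random orthonormal basis here, purely for illustration).
W, _ = np.linalg.qr(rng.standard_normal((n, n)))
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def refine(x, thresh=0.05):
    """Refining block: encode, shrink, decode with tied (identical) weights."""
    return W.T @ soft(W @ x, thresh)

# Data-fit block: x = argmin ||y - A x||^2 + beta ||x - z||^2 (closed form).
beta = 1.0
M = np.linalg.inv(A.T @ A + beta * np.eye(n))

x = np.zeros(n)
for _ in range(50):
    z = refine(x)                  # denoising / refining update
    x = M @ (A.T @ y + beta * z)   # model-based (MBIR-style) update

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

The two-block structure is the point here; in BCD-Net the linear refiner is replaced by a learned CNN with identical encoder and decoder kernels.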
Convolutional Dictionary Learning: Acceleration and Convergence
Convolutional dictionary learning (CDL or sparsifying CDL) has many
applications in image processing and computer vision. There has been growing
interest in developing efficient algorithms for CDL, mostly relying on the
augmented Lagrangian (AL) method or the variant alternating direction method of
multipliers (ADMM). When their parameters are properly tuned, AL methods have
shown fast convergence in CDL. However, the parameter tuning process is not
trivial due to its data dependence and, in practice, the convergence of AL
methods depends on the AL parameters for nonconvex CDL problems. To mitigate
these problems, this paper proposes a new practically feasible and convergent
Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The
BPG-M-based CDL is investigated with different block updating schemes and
majorization matrix designs, and further accelerated by incorporating some
momentum coefficient formulas and restarting techniques. All of the methods
investigated incorporate a boundary artifacts removal (or, more generally,
sampling) operator in the learning model. Numerical experiments show that,
without needing any parameter tuning process, the proposed BPG-M approach
converges more stably to desirable solutions of lower objective values than the
existing state-of-the-art ADMM algorithm and its memory-efficient variant do.
Compared to the ADMM approaches, the BPG-M method using a multi-block updating
scheme is particularly useful in single-threaded CDL algorithms handling large
datasets, owing to its lower memory requirement and the absence of polynomial
computational complexity. Image denoising experiments show that, for relatively strong
additive white Gaussian noise, the filters learned by BPG-M-based CDL
outperform those trained by the ADMM approach. Comment: 21 pages, 7 figures, submitted to IEEE Transactions on Image Processing
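A minimal sketch of the majorizer idea (not the paper's full BPG-M algorithm): for 1-D convolutional sparse coding with one fixed filter, a scalar majorizer M >= ||d||_1^2 bounds the data-fit Hessian, giving a proximal gradient step that needs no parameter tuning; the filter d, lam, and iteration count below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 128, 7                    # signal length, (odd) filter length
d = rng.standard_normal(K)
d /= np.linalg.norm(d)

# Ground-truth sparse code and observed signal x = d * z ("same"-size conv).
z_true = np.zeros(N)
idx = rng.choice(N, size=6, replace=False)
z_true[idx] = rng.standard_normal(6)
x = np.convolve(z_true, d, mode="same")

lam = 0.01
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Scalar majorizer: the convolution operator's squared spectral norm is at
# most ||d||_1^2, so M majorizes the Hessian of the data-fit term.
M = np.sum(np.abs(d)) ** 2

def grad(z):
    """Gradient of 0.5 * ||x - d*z||^2: correlate the residual with d."""
    r = np.convolve(z, d, mode="same") - x
    return np.convolve(r, d[::-1], mode="same")

z = np.zeros(N)
for _ in range(200):
    z = soft(z - grad(z) / M, lam / M)   # one majorized proximal gradient step

resid = np.linalg.norm(np.convolve(z, d, mode="same") - x)
print(resid)
```

A sharper (e.g., diagonal) majorizer would allow larger steps, which is one of the design axes the paper investigates.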
Sparsity and Parallel Acquisition: Optimal Uniform and Nonuniform Recovery Guarantees
The problem of multiple sensors simultaneously acquiring measurements of a
single object can be found in many applications. In this paper, we present the
optimal recovery guarantees for the recovery of compressible signals from
multi-sensor measurements using compressed sensing. In the first half of the
paper, we present both uniform and nonuniform recovery guarantees for the
conventional sparse signal model in a so-called distinct sensing scenario. In
the second half, using the so-called sparse and distributed signal model, we
present nonuniform recovery guarantees which effectively broaden the class of
sensing scenarios for which optimal recovery is possible, including the
so-called identical sampling scenario. To verify our recovery guarantees we
provide several numerical results including phase transition curves and
numerically-computed bounds. Comment: 13 pages, 3 figures
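The distinct vs. identical distinction can be illustrated with a short numpy sketch (the diagonal sensor profiles here are arbitrary illustrative choices): under identical sampling every sensor applies the same sampling matrix A, so all diversity across sensors comes from the per-sensor profile matrices H_c.

```python
import numpy as np

rng = np.random.default_rng(4)
n, C, m = 32, 3, 8
A = rng.standard_normal((m, n)) / np.sqrt(C * m)  # one shared sampling matrix

# Diagonal sensor profiles (e.g., per-sensor gains), chosen arbitrarily here.
H = [np.diag(rng.uniform(0.5, 1.5, n)) for _ in range(C)]

# Identical sampling: every sensor applies the SAME A after its own profile,
# so the stacked system can still carry more information than A alone.
A_stack = np.vstack([A @ H[c] for c in range(C)])
print(A_stack.shape)    # C*m total measurements of a single length-n signal
```

Although A alone has rank at most m, sufficiently diverse profiles let the stacked system exceed that rank, which is what makes optimal identical-sampling guarantees possible at all.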
Convolutional Analysis Operator Learning: Dependence on Training Data
Convolutional analysis operator learning (CAOL) enables the unsupervised
training of (hierarchical) convolutional sparsifying operators or autoencoders
from large datasets. One can use many training images for CAOL, but a precise
understanding of the impact of doing so has remained an open question. This
paper presents a series of results that lend insight into the impact of dataset
size on the filter update in CAOL. The first result is a general deterministic
bound on errors in the estimated filters, and is followed by a bound on the
expected errors as the number of training samples increases. The second result
provides a high probability analogue. The bounds depend on properties of the
training data, and we investigate their empirical values with real data. Taken
together, these results provide evidence for the potential benefit of using
more training data in CAOL. Comment: 5 pages, 2 figures
Convolutional Analysis Operator Learning: Acceleration and Convergence
Convolutional operator learning is gaining attention in many signal
processing and computer vision applications. Learning kernels has mostly relied
on so-called patch-domain approaches that extract and store many overlapping
patches across training signals. Due to memory demands, patch-domain methods
have limitations when learning kernels from large datasets -- particularly with
multi-layered structures, e.g., convolutional neural networks -- or when
applying the learned kernels to high-dimensional signal recovery problems. The
so-called convolution approach does not store many overlapping patches, and
thus overcomes the memory problems particularly with careful algorithmic
designs; it has been studied within the "synthesis" signal model, e.g.,
convolutional dictionary learning. This paper proposes a new convolutional
analysis operator learning (CAOL) framework that learns an analysis sparsifying
regularizer with the convolution perspective, and develops a new convergent
Block Proximal Extrapolated Gradient method using a Majorizer (BPEG-M) to solve
the corresponding block multi-nonconvex problems. To learn diverse filters
within the CAOL framework, this paper introduces an orthogonality constraint
that enforces a tight-frame filter condition, and a regularizer that promotes
diversity between filters. Numerical experiments show that, with sharp
majorizers, BPEG-M significantly accelerates the CAOL convergence rate compared
to the state-of-the-art block proximal gradient (BPG) method. Numerical
experiments for sparse-view computational tomography show that a convolutional
sparsifying regularizer learned via CAOL significantly improves reconstruction
quality compared to a conventional edge-preserving regularizer. Using more and
wider kernels in a learned regularizer better preserves edges in reconstructed
images. Comment: 22 pages, 11 figures; fixed incorrect math theorem numbers in figures
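The tight-frame orthogonality constraint admits a closed-form projection via the polar factor of the filter matrix. A sketch, assuming filters of length R stacked as the columns of an R x K matrix D with K >= R (the notation here is illustrative, not necessarily the paper's):

```python
import numpy as np

rng = np.random.default_rng(5)
R, K = 9, 16                     # filter size (e.g., 3x3 -> 9) and filter count

def project_tight_frame(D):
    """Project D (R x K) onto {D : D @ D.T = (1/R) * I}, the tight-frame
    filter condition, via the polar factor of D (scaled partial isometry)."""
    U, _, Vt = np.linalg.svd(D, full_matrices=False)  # U: RxR, Vt: RxK
    return (U @ Vt) / np.sqrt(R)

D = rng.standard_normal((R, K))
D_tf = project_tight_frame(D)
print(np.allclose(D_tf @ D_tf.T, np.eye(R) / R))
```

The projection replaces every singular value of D with 1/sqrt(R), so the constrained filter update reduces to a single SVD per iteration.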
Momentum-Net: Fast and convergent iterative neural network for inverse problems
Iterative neural networks (INN) are rapidly gaining attention for solving
inverse problems in imaging, image processing, and computer vision. INNs
combine regression NNs and an iterative model-based image reconstruction (MBIR)
algorithm, often leading to both good generalization capability and
reconstruction quality that surpasses existing MBIR optimization models.
This paper proposes the first fast and convergent INN architecture,
Momentum-Net, by generalizing a block-wise MBIR algorithm that uses momentum
and majorizers with regression NNs. For fast MBIR, Momentum-Net uses momentum
terms in extrapolation modules, and noniterative MBIR modules at each iteration
by using majorizers, where each iteration of Momentum-Net consists of three
core modules: image refining, extrapolation, and MBIR. Momentum-Net guarantees
convergence to a fixed point for general differentiable (non)convex MBIR
functions (or data-fit terms) and convex feasible sets, under two asymptotic
conditions. To consider data-fit variations across training and testing
samples, we also propose a regularization parameter selection scheme based on
the "spectral spread" of majorization matrices. Numerical experiments for
light-field photography using a focal stack and sparse-view computational
tomography demonstrate that, given identical regression NN architectures,
Momentum-Net significantly improves MBIR speed and accuracy over several
existing INNs; it significantly improves reconstruction quality compared to a
state-of-the-art MBIR method in each application. Comment: 28 pages, 13 figures, 3 algorithms, 4 tables, submitted revision to IEEE T-PAMI
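The three-module structure can be caricatured in numpy (a FISTA-style illustrative sketch, not Momentum-Net itself): an extrapolation module with momentum coefficients, a refining module (the identity here, standing in for a trained NN), and a noniterative gradient-based MBIR module for a least-squares data-fit term.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 32, 48
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
y = A @ x_true

L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the data-fit gradient

def refine(z):
    """Image-refining module; the identity here stands in for a trained NN."""
    return z

x = np.zeros(n)
x_prev = x.copy()
t = 1.0
for _ in range(100):
    t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
    z = x + ((t - 1) / t_next) * (x - x_prev)   # extrapolation (momentum) module
    z = refine(z)                               # refining module
    x_prev, x = x, z - A.T @ (A @ z - y) / L    # noniterative MBIR (gradient) module
    t = t_next

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

Replacing the identity refiner with a learned network at each iteration is what turns this classical accelerated scheme into an iterative neural network.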